Invariant Stochastic Encoders

Author

  • Stephen P. Luttrell
Abstract

The theory of stochastic vector quantisers (SVQ) has been extended to allow the quantiser to develop invariances, so that only "large" degrees of freedom in the input vector are represented in the code. This has been applied to the problem of encoding data vectors that are a superposition of a "large" jammer and a "small" signal, so that only the jammer is represented in the code. This allows the jammer to be subtracted from the total input vector (i.e. the jammer is nulled), leaving a residual that contains only the underlying signal. The main advantage of this approach to jammer nulling is that little prior knowledge of the jammer is assumed: the jammer's properties are discovered automatically by the SVQ as it is trained on examples of input vectors.
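The nulling idea in the abstract — encode only the large jammer, then subtract its reconstruction to expose the small signal — can be illustrated with a plain deterministic vector quantiser trained by k-means updates. This is only a rough analogue of the stochastic quantiser in the paper; all names and the synthetic data setup below are illustrative assumptions, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each input is a "large" jammer drawn from a few
# strong modes plus a "small" underlying signal.
K = 4                                                # codebook size
jammer_modes = rng.normal(0.0, 5.0, size=(K, 8))     # large-amplitude modes
signal = rng.normal(0.0, 0.1, size=(1000, 8))        # small signal
jammer = jammer_modes[rng.integers(0, K, size=1000)]
x = jammer + signal

# Train a deterministic vector quantiser with k-means updates.
codebook = x[rng.choice(len(x), K, replace=False)].copy()
for _ in range(50):
    # assign each input vector to its nearest code
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(1)
    # move each code to the mean of its assigned vectors
    for k in range(K):
        if (idx == k).any():
            codebook[k] = x[idx == k].mean(0)

# "Null" the jammer: subtract the reconstruction, leaving a residual
# dominated by the small signal rather than the large jammer.
residual = x - codebook[idx]
```

Because the codebook only has capacity for the few strong jammer modes, the reconstruction captures the jammer and the residual's scale drops to roughly that of the signal.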


Related articles

On the Structure of Real-Time Encoders and Decoders in a Multi-Terminal Communication System

A real-time communication system with two encoders communicating with a single receiver over separate noisy channels is considered. The two encoders make distinct partial observations of a Markov source. Each encoder must encode its observations into a sequence of discrete symbols. The symbols are transmitted over noisy channels to a finite memory receiver that attempts to reconstruct some func...


Optimal and near-optimal encoders for short and moderate-length tail-biting trellises

The results of an extensive search for short and moderate-length polynomial convolutional encoders for time-invariant tail-biting representations of block codes at rates R = 1/4, 1/3, 1/2, and 2/3 are reported. The tail-biting representations found are typically as good as the best known block codes.


Learning invariant features through local space contraction

We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jac...
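The penalty described in this snippet — the Frobenius norm of the encoder's Jacobian with respect to the input — has a simple closed form for a one-layer sigmoid encoder. The sketch below is a generic illustration of that computation under assumed random weights, not code from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-layer sigmoid encoder h = sigmoid(W x + b); random weights,
# purely for illustration.
W = rng.normal(size=(5, 8))
b = np.zeros(5)
x = rng.normal(size=8)

h = 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Jacobian of h w.r.t. x for a sigmoid layer: dh_j/dx_i = h_j (1 - h_j) W_ji
J = (h * (1.0 - h))[:, None] * W

# Contractive penalty: squared Frobenius norm of the Jacobian
penalty = (J ** 2).sum()

# Equivalent closed form that avoids forming J explicitly
penalty_closed = ((h * (1.0 - h)) ** 2 @ (W ** 2).sum(axis=1)).item()
```

Minimising this penalty alongside the reconstruction cost contracts the hidden representation locally, which is what makes the learned features invariant to small input perturbations.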



Learning Discrete Representations via Information Maximizing Self-Augmented Training

Our method is related to denoising auto-encoders (Vincent et al., 2008). Auto-encoders maximize a lower bound of mutual information (Cover & Thomas, 2012) between inputs and their hidden representations (Vincent et al., 2008), while the denoising mechanism regularizes the auto-encoders to be locally invariant. However, such a regularization does not necessarily impose the invariance on the hidd...



Journal:
  • CoRR

Volume cs.NE/0408050  Issue

Pages  -

Publication date 2004